Sequential recommendation is an important task that predicts the next item a user will access based on a sequence of previously interacted items. Most existing works learn user preference as the transition pattern from the previous item to the next one, ignoring the time interval between the two items. However, we observe that the time intervals in a sequence may vary significantly, resulting in ineffective user modeling due to the issue of \emph{preference drift}. We conducted an empirical study to validate this observation and found that a sequence with uniformly distributed time intervals (denoted as a uniform sequence) is more beneficial for performance improvement than one with greatly varying time intervals. Therefore, we propose to augment sequence data from the perspective of time intervals, which has not been studied in the literature. Specifically, we design five operators (Ti-Crop, Ti-Reorder, Ti-Mask, Ti-Substitute, Ti-Insert) to transform an original non-uniform sequence into a uniform one while accounting for the variance of time intervals. We then devise a control strategy to execute data augmentation on item sequences of different lengths. Finally, we implement these improvements on top of the state-of-the-art model CoSeRec and validate our approach on four real-world datasets. The experimental results show that our approach achieves significantly better performance than 11 competing methods. Our implementation is available at https://github.com/KingGugu/TiCoSeRec.
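The abstract does not spell out the operator definitions, but a Ti-Crop-style transformation could plausibly keep the contiguous window whose time intervals have the lowest variance. The sketch below is a minimal illustration under that assumption; the function name, crop ratio, and selection rule are hypothetical.

```python
import numpy as np

def ti_crop(items, timestamps, crop_ratio=0.6):
    """Illustrative Ti-Crop: keep the contiguous subsequence whose
    time intervals have the smallest variance (a hypothetical reading
    of time-interval-aware cropping, not the paper's exact rule)."""
    n = len(items)
    length = max(2, int(n * crop_ratio))
    best_start, best_var = 0, float("inf")
    for start in range(n - length + 1):
        intervals = np.diff(timestamps[start:start + length])
        var = intervals.var()
        if var < best_var:
            best_start, best_var = start, var
    return (items[best_start:best_start + length],
            timestamps[best_start:best_start + length])

# Example: a sequence with one long gap; the crop avoids it.
items = [10, 42, 7, 99, 23, 5]
ts = [0, 5, 11, 400, 406, 411]  # large gap between index 2 and 3
print(ti_crop(items, ts))  # keeps the low-variance window [10, 42, 7]
```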
Classical algorithms are often ineffective for solving non-convex optimization problems where local minima are separated by high barriers. In this paper, we explore possible quantum speedups for non-convex optimization by leveraging the global effect of quantum tunneling. Specifically, we introduce a quantum algorithm termed the quantum tunneling walk (QTW) and apply it to non-convex problems whose local minima are approximately global minima. We show that QTW achieves a quantum speedup over classical stochastic gradient descent (SGD) when the barriers between different local minima are high but thin and the minima are flat. Based on this observation, we construct a specific double-well landscape in which classical algorithms cannot efficiently hit one target well, whereas QTW can when given a proper initial state near the known well. Finally, we corroborate our findings with numerical experiments.
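To make the barrier geometry concrete, here is a toy classical illustration of why SGD stalls in such landscapes; the potential and hyperparameters are invented for illustration and are not the paper's construction.

```python
import numpy as np

# A toy 1-D double-well loss with a high but thin barrier between the
# wells, mirroring the kind of landscape the abstract describes.
def loss(x):
    return (x**2 - 1.0)**2 + 5.0 * np.exp(-(x / 0.05)**2)  # thin barrier at x=0

def grad(x, eps=1e-5):
    # Finite-difference gradient, good enough for a 1-D illustration.
    return (loss(x + eps) - loss(x - eps)) / (2 * eps)

rng = np.random.default_rng(0)
x = -1.0  # start near the known (left) well
for _ in range(10_000):
    x -= 0.01 * (grad(x) + 0.1 * rng.normal())  # noisy gradient step

# SGD stays in the left well: the barrier is high, so classical noise
# rarely carries the iterate across, whereas tunneling acts through it.
print(f"final x = {x:.3f}")  # expected to remain near -1
```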
Learning with noisy labels (LNL) aims to design strategies that improve model performance and generalization by mitigating the effects of overfitting to noisy labels. The key to success in LNL lies in identifying as many clean samples as possible from massive noisy data while rectifying wrongly assigned noisy labels. Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, which easily gives rise to confirmation bias. To mitigate this issue, we propose neighborhood collective estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors. Specifically, our method consists of two steps: 1) neighborhood collective noise verification, which separates all training samples into a clean or a noisy subset, and 2) neighborhood collective label correction, which relabels the noisy samples, after which auxiliary techniques are used to assist further model optimization. Extensive experiments on four commonly used benchmark datasets (i.e., CIFAR-10, CIFAR-100, Clothing-1M, and WebVision-1.0) demonstrate that our proposed method considerably outperforms state-of-the-art methods.
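A minimal sketch of the neighborhood-based re-estimation idea might look as follows, assuming cosine-similarity nearest neighbors and an agreement score between a sample's own prediction and its neighbors' mean prediction; these specifics are assumptions, not the paper's exact formulation.

```python
import numpy as np

def neighborhood_reliability(features, probs, k=10):
    """Hypothetical sketch of neighborhood collective estimation:
    re-score each sample's predictive reliability by agreement with
    its k nearest neighbors in feature space, rather than trusting
    the sample's own predicted distribution alone."""
    # Cosine similarity between L2-normalized features.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # exclude self from the neighborhood
    knn = np.argsort(-sim, axis=1)[:, :k]
    # Collective estimate: mean predicted distribution of the neighbors.
    neighbor_probs = probs[knn].mean(axis=1)
    # Reliability: overlap between a sample's own prediction and the
    # collective one (higher = more likely clean).
    return (probs * neighbor_probs).sum(axis=1)

# Samples scoring below a threshold would go to the noisy subset and
# be relabeled from the collective distribution.
```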
Deep models trained with noisy labels easily overfit and struggle to generalize. Most existing solutions are based on the idealized assumption that the label noise is class-conditional, i.e., instances of the same class share the same noise model independently of their features. In practice, real-world noise patterns are usually more fine-grained and instance-dependent, which poses a great challenge, especially in the presence of inter-class imbalance. In this paper, we propose a two-stage clean sample identification method to address the above challenges. First, we employ a class-level feature clustering procedure to identify, early on, the clean samples near the class-wise prediction centers. Notably, we address the class imbalance problem based on the prediction entropy of rare classes. Second, for the remaining clean samples that lie close to the ground-truth class boundaries (and are usually mixed with instance-dependent noisy samples), we propose a novel consistency-based classification method that identifies them using the consistency of two classifier heads: the higher the consistency, the more likely a sample is clean. Extensive experiments on several challenging benchmarks demonstrate the superiority of our method over state-of-the-art approaches.
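For the second stage, a hedged sketch of scoring samples by two-head consistency could use a symmetric divergence between the heads' predicted distributions; the divergence choice here is an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def two_head_consistency(logits_a, logits_b):
    """Sketch of the second stage: score boundary samples by the
    agreement of two classifier heads (the exact score is assumed,
    not taken from the paper). Higher score = more likely clean."""
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    # Symmetric agreement via a Jensen-Shannon-style divergence.
    m = 0.5 * (p_a + p_b)
    js = 0.5 * (F.kl_div(m.log(), p_a, reduction="none").sum(1)
                + F.kl_div(m.log(), p_b, reduction="none").sum(1))
    return -js  # closer to 0 means higher head consistency
```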
Internal language model estimation (ILME) based language model (LM) fusion has been shown to significantly improve recognition results over conventional shallow fusion in both intra-domain and cross-domain speech recognition tasks. In this paper, we attempt to apply the ILME method to cross-domain code-switching speech recognition (CSSR). Specifically, our curiosity comes from several aspects. First, we are curious about the effectiveness of ILME-based LM fusion for intra-domain and cross-domain CSSR tasks; we verify this without merging the two code-switching domains. More importantly, we train an end-to-end (E2E) speech recognition model by merging two monolingual datasets and observe the efficacy of the proposed ILME-based LM fusion for CSSR. Experimental results on SEAME, a code-switching corpus from Southeast Asia, and another Mainland China code-switching dataset demonstrate the effectiveness of the proposed ILME-based LM fusion method.
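For context, ILME-based fusion typically subtracts the estimated internal LM score from the E2E score before adding the external LM score; a minimal scoring function (with placeholder weights to be tuned on development data) is:

```python
def ilme_fused_score(log_p_e2e, log_p_ilm, log_p_ext, lam_ilm=0.3, lam_ext=0.5):
    """Standard ILME-based fusion score for a hypothesis Y given input X:

        score(Y|X) = log P_E2E(Y|X) - lam_ilm * log P_ILM(Y) + lam_ext * log P_ext(Y)

    The interpolation weights here are illustrative defaults."""
    return log_p_e2e - lam_ilm * log_p_ilm + lam_ext * log_p_ext
```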
Although deep learning algorithms have been intensively developed for computer-aided tuberculosis diagnosis (CTD), they mainly depend on carefully annotated datasets, leading to considerable time and resource consumption. Weakly supervised learning (WSL), which exploits coarse-grained labels to accomplish fine-grained tasks, has the potential to solve this problem. In this paper, we first propose a new large-scale tuberculosis (TB) chest X-ray dataset, the Tuberculosis chest X-ray Attribute dataset (TBX-Att), and then establish an attribute-assisted weakly-supervised framework to classify and localize TB, exploiting attribute information to overcome the insufficient supervision of the WSL scenario. Specifically, the TBX-Att dataset contains 2000 X-ray images annotated by experienced radiologists with seven kinds of attributes for TB relational reasoning. It also incorporates the public TBX11K dataset, with 11200 X-ray images, to facilitate weakly supervised detection. Second, we exploit a multi-scale feature interaction model for TB area classification and detection with attribute relational reasoning. The proposed model is evaluated on the TBX-Att dataset and will serve as a solid baseline for future research. The code and data will be available at https://github.com/gangmingzhao/tb-attribute-weak-localization.
Explaining machine learning models is an important and increasingly popular area of research. The Shapley value from game theory has been proposed as a prime approach for computing feature importance towards model predictions on images, text, tabular data, and recently graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for GNN explanation, where the task is to identify the most important subgraph and constituent nodes for GNN predictions. We claim that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method that leverages critical graph structure information to improve the explanation. Specifically, we define a scoring function based on a new structure-aware value from cooperative game theory proposed by Hamiache and Navarro (HN). When used to score node importance, the HN value utilizes graph structures to attribute the cooperation surplus between neighboring nodes, resembling message passing in GNNs, so that node importance scores reflect not only node feature importance but also node structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification.
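For reference, the structure-agnostic baseline being critiqued can be approximated by permutation sampling, as in the sketch below; `value_fn` is a hypothetical wrapper around a masked GNN forward pass. The HN value replaces these topology-blind permutations with a structure-aware rule for sharing cooperation surplus among neighbors.

```python
import numpy as np

def mc_shapley_node_importance(nodes, value_fn, num_samples=200, rng=None):
    """Monte Carlo estimate of Shapley values for node importance.
    value_fn(subset) should return the GNN's prediction score when only
    `subset` of nodes is kept (e.g., the rest masked out). Note that the
    permutation sampling ignores the graph topology entirely, which is
    exactly the limitation the abstract points out."""
    rng = rng or np.random.default_rng(0)
    phi = {v: 0.0 for v in nodes}
    for _ in range(num_samples):
        perm = rng.permutation(nodes)
        coalition, prev = [], value_fn(frozenset())
        for v in perm:
            coalition.append(v)
            cur = value_fn(frozenset(coalition))
            phi[v] += (cur - prev) / num_samples  # marginal contribution
            prev = cur
    return phi
```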
We propose to learn an invariant causal predictor that is robust to distribution shifts in the supervised regression scenario. Based on a disentangled causal factorization that describes the underlying data-generating process, we attribute distribution shifts to mutations of the generating factors, which covers a wide range of distribution-shift cases since we impose no restriction on the causal structure or the sources of mutation. Under this causal framework, we identify a set of invariant predictors based on operators. We provide sufficient and necessary conditions for a predictor to be min-max optimal, i.e., to minimize the worst-case quadratic loss across all domains. These conditions are justifiable under the Markov and faithfulness assumptions, and thus inspire a practical algorithm for identifying the optimal predictor. For empirical estimation, we propose a permutation-based recovery scheme guided by a local causal discovery procedure. The utility and effectiveness of our method are demonstrated on simulated data and two real-world applications: Alzheimer's disease diagnosis and gene function prediction.
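For concreteness, the min-max optimality criterion referenced above can be written as the worst-case quadratic risk over the family of domains $\mathcal{E}$ (standard notation assumed; the paper's symbols may differ):

```latex
f^{*} \in \arg\min_{f} \; \max_{e \in \mathcal{E}} \;
\mathbb{E}_{(X,Y) \sim P^{e}}\!\left[(Y - f(X))^{2}\right]
```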
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise segmentation performance on the target domain. A key idea for tackling this problem is to perform image-level and feature-level adaptation jointly. Unfortunately, such unified approaches for UDA tasks are lacking in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; we further regularize category centers in the source domain through a category-oriented triplet loss and perform target-domain consistency regularization over augmented target-domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. On the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses the previous SOTA by 8%, achieving 58.2% mIoU.
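A minimal sketch of image-level photometric alignment, assuming simple per-channel mean/std matching toward target-domain statistics (the paper's learned module is surely more elaborate):

```python
import numpy as np

def photometric_align(src, tgt):
    """Hypothetical sketch of global photometric alignment: shift the
    source image's per-channel mean/std toward the target image's
    statistics, illustrating the image-level alignment idea only."""
    out = np.empty_like(src, dtype=np.float32)
    for c in range(src.shape[2]):  # HWC layout assumed
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std() + 1e-6
        t_mu, t_sigma = tgt[..., c].mean(), tgt[..., c].std() + 1e-6
        out[..., c] = (src[..., c] - s_mu) / s_sigma * t_sigma + t_mu
    return np.clip(out, 0.0, 255.0)
```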
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, aiming to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration, which explicitly encodes more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose a non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
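A hedged sketch of the multi-task objective, combining per-scale pixel restoration with siamese feature comparison; the loss forms and weights are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def pcrl_style_loss(restored, target, feats_a, feats_b, w_pix=1.0, w_cmp=1.0):
    """Sketch of the multi-task objective described in the abstract:
    pixel restoration plus siamese feature comparison at every pyramid
    scale.
      restored/target: lists of images, one per pyramid scale
      feats_a/feats_b: lists of siamese-view features, one per scale
    """
    loss = 0.0
    for r, t, fa, fb in zip(restored, target, feats_a, feats_b):
        pix = F.mse_loss(r, t)                        # multi-scale restoration
        cmp_ = 1 - F.cosine_similarity(
            fa.flatten(1), fb.flatten(1)).mean()      # siamese comparison
        loss = loss + w_pix * pix + w_cmp * cmp_
    return loss
```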